Bilevel optimization plays an essential role in many machine learning tasks, ranging from hyperparameter optimization to meta-learning. Existing studies on bilevel optimization, however, focus on either centralized or synchronous distributed settings. Centralized bilevel optimization approaches require collecting massive amounts of data on a single server, which inevitably incurs significant communication expenses and may give rise to data privacy risks. Synchronous distributed bilevel optimization algorithms, on the other hand, often face the straggler problem and stop working entirely if a few workers fail to respond. As a remedy, we propose an Asynchronous Distributed Bilevel Optimization (ADBO) algorithm. The proposed ADBO can tackle bilevel optimization problems with both nonconvex upper-level and lower-level objective functions, and its convergence is theoretically guaranteed. Furthermore, our theoretical analysis reveals that the iteration complexity of ADBO to obtain an $\epsilon$-stationary point is upper bounded by $\mathcal{O}(\frac{1}{\epsilon^2})$. Thorough empirical studies on public datasets have been conducted to demonstrate the effectiveness and efficiency of the proposed ADBO.
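To make the bilevel structure concrete, below is a minimal single-machine sketch of a hypergradient step on a toy problem with a tractable lower level. This is not ADBO itself (all asynchronous worker coordination is omitted), and the objectives, dimensions, and step sizes are illustrative assumptions.

```python
import numpy as np

# Toy bilevel problem:
#   lower level: y*(x) = argmin_y 0.5 * ||y - A x||^2, so y*(x) = A x and dy*/dx = A
#   upper level: F(x, y) = 0.5 * ||y - b||^2 + 0.1 * sum(cos(x))   (nonconvex in x)
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))
b = rng.standard_normal(5)

def lower_solve(x, steps=50, lr=0.1):
    """Approximate y*(x) by gradient descent on the lower-level objective."""
    y = np.zeros(A.shape[0])
    for _ in range(steps):
        y -= lr * (y - A @ x)            # grad_y of 0.5 * ||y - A x||^2
    return y

def hypergradient(x, y):
    """Total derivative dF/dx = grad_x F + (dy*/dx)^T grad_y F, with dy*/dx = A here."""
    return -0.1 * np.sin(x) + A.T @ (y - b)

x = rng.standard_normal(3)
for _ in range(200):                     # upper-level descent on x
    x -= 0.05 * hypergradient(x, lower_solve(x))

y = lower_solve(x)
print("upper-level objective:", 0.5 * np.sum((y - b) ** 2) + 0.1 * np.sum(np.cos(x)))
```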
The dual-encoder has become the de facto architecture for dense retrieval. Typically, it computes the latent representations of the query and document independently, thus failing to fully capture the interactions between the query and document. To alleviate this, recent work seeks query-informed representations of documents: during training, it expands the document with a real query, while replacing the real query with a generated pseudo query at inference. This discrepancy between training and inference makes the dense retrieval model pay more attention to the query information and ignore the document when computing the document representation. As a result, it can even perform worse than the vanilla dense retrieval model, since its performance depends heavily on the relevance between the generated queries and the real query. In this paper, we propose a curriculum sampling strategy, which also resorts to pseudo queries at training and gradually increases the relevance of the generated query to the real query. In this way, the retrieval model learns to extend its attention from the document alone to both the document and the query, hence obtaining high-quality query-informed document representations. Experimental results on several passage retrieval datasets show that our approach outperforms previous dense retrieval methods.
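As a rough illustration of the sampling schedule (the `relevance` scorer and the sliding-window heuristic below are our own assumptions, not the paper's exact procedure), the idea can be sketched as:

```python
import random

def curriculum_sample(pseudo_queries, real_query, relevance, step, total_steps):
    """Pick a pseudo query whose relevance to the real query grows with training.

    `relevance(q, real_query)` is a hypothetical scorer in [0, 1] (e.g., lexical
    overlap or a cross-encoder score); early in training we draw low-relevance
    pseudo queries, and we gradually shift toward highly relevant ones.
    """
    ranked = sorted(pseudo_queries, key=lambda q: relevance(q, real_query))
    progress = step / max(1, total_steps - 1)          # 0 at the start, 1 at the end
    center = int(progress * (len(ranked) - 1))
    window = max(1, len(ranked) // 4)                  # sample within a sliding window
    lo = max(0, center - window // 2)
    return random.choice(ranked[lo:min(len(ranked), lo + window)])

# Toy usage with token overlap as the relevance proxy.
overlap = lambda q, r: len(set(q.split()) & set(r.split())) / max(1, len(set(r.split())))
pseudo = ["cheap flights", "what is dense retrieval", "dense retrieval training tricks"]
print(curriculum_sample(pseudo, "training dense retrieval models", overlap, step=90, total_steps=100))
```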
Transformer models have achieved superior performance in various natural language processing tasks. However, the quadratic computational cost of the attention mechanism limits their practicality for long sequences. Existing attention variants improve computational efficiency, but they have limited ability to compute global information effectively. In parallel to Transformer models, state space models (SSMs) are tailored for long sequences, but they are not flexible enough to capture complicated local information. We propose SPADE, short for $\underline{\textbf{S}}$tate s$\underline{\textbf{P}}$ace $\underline{\textbf{A}}$ugmente$\underline{\textbf{D}}$ Transform$\underline{\textbf{E}}$r. Specifically, we augment the bottom layer of SPADE with an SSM and employ efficient local attention methods for the other layers. The SSM supplies global information, remedying the lack of long-range dependency modeling in local attention methods. Experimental results on the Long Range Arena benchmark and on language modeling tasks demonstrate the effectiveness of the proposed method. To further demonstrate the scalability of SPADE, we pre-train large encoder-decoder models and present fine-tuning results on natural language understanding and natural language generation tasks.
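A toy sketch of the layer arrangement follows; our simplified diagonal SSM, chunked attention, and the way the bottom layer combines the two branches are stand-ins for the paper's components, and the dimensions and depth are assumptions:

```python
import torch
import torch.nn as nn

class DiagonalSSM(nn.Module):
    """Toy diagonal state space layer: h_t = a * h_{t-1} + b * x_t, y_t = c * h_t."""
    def __init__(self, dim):
        super().__init__()
        self.log_a = nn.Parameter(torch.full((dim,), -1.0))  # decay in (0, 1) via sigmoid
        self.b = nn.Parameter(torch.ones(dim))
        self.c = nn.Parameter(torch.ones(dim))

    def forward(self, x):                      # x: (batch, seq, dim)
        a = torch.sigmoid(self.log_a)
        h = torch.zeros_like(x[:, 0])
        ys = []
        for t in range(x.size(1)):             # naive scan; real SSMs use FFT/parallel scans
            h = a * h + self.b * x[:, t]
            ys.append(self.c * h)
        return torch.stack(ys, dim=1)

class LocalAttention(nn.Module):
    """Windowed self-attention: tokens attend only within fixed-size chunks."""
    def __init__(self, dim, heads=4, window=16):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.window = window

    def forward(self, x):
        b, n, d = x.shape
        x = nn.functional.pad(x, (0, 0, 0, (-n) % self.window))
        chunks = x.view(b * (x.size(1) // self.window), self.window, d)
        out, _ = self.attn(chunks, chunks, chunks)
        return out.view(b, -1, d)[:, :n]

class SpadeLikeEncoder(nn.Module):
    """Bottom layer augmented with an SSM; upper layers use local attention only."""
    def __init__(self, dim, depth=4):
        super().__init__()
        self.ssm = DiagonalSSM(dim)
        self.layers = nn.ModuleList([LocalAttention(dim) for _ in range(depth)])

    def forward(self, x):
        x = x + self.ssm(x) + self.layers[0](x)  # global (SSM) + local branches at the bottom
        for layer in self.layers[1:]:
            x = x + layer(x)
        return x

print(SpadeLikeEncoder(dim=32)(torch.randn(2, 50, 32)).shape)  # torch.Size([2, 50, 32])
```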
Knowledge distillation is often used to transfer knowledge from a strong teacher model to a relatively weak student model. Traditional knowledge distillation methods are either response-based or feature-based. Response-based methods are the most widely used but suffer from a lower upper bound on model performance, while feature-based methods impose constraints on vocabularies and tokenizers. In this paper, we propose LEAD, a tokenizer-free, liberal feature-based distillation method. LEAD aligns the distributions of the teacher model and the student model; it is effective, extendable, and portable, and places no requirements on vocabularies, tokenizers, or model architectures. Extensive experiments show the effectiveness of LEAD on several widely-used benchmarks, including MS MARCO Passage, TREC Passage 19, TREC Passage 20, MS MARCO Document, TREC Document 19 and TREC Document 20.
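Since the abstract does not spell out LEAD's objective, here is one generic, tokenizer-free way to align teacher and student distributions at the level of relevance scores over shared candidates; this is our illustrative assumption, not necessarily LEAD's exact loss:

```python
import torch
import torch.nn.functional as F

def score_distribution_alignment_loss(student_scores, teacher_scores, tau=1.0):
    """KL divergence between the student's and teacher's relevance-score
    distributions over the same candidate passages.

    student_scores, teacher_scores: (batch, num_candidates) raw relevance scores.
    """
    t = F.softmax(teacher_scores / tau, dim=-1)
    log_s = F.log_softmax(student_scores / tau, dim=-1)
    return F.kl_div(log_s, t, reduction="batchmean") * tau * tau
```

Because only scores over shared (query, passage) candidates are compared, the teacher and student may use different vocabularies, tokenizers, or architectures, which is the property the abstract emphasizes.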
Knowledge distillation is an effective way to transfer knowledge from a strong teacher to an efficient student model. Ideally, we expect that the better the teacher, the better the student. However, this expectation does not always come true: it is common that a better teacher model yields a worse student via distillation, due to the non-negligible gap between teacher and student. To bridge this gap, we propose PROD, a progressive distillation method for dense retrieval. PROD consists of teacher progressive distillation and data progressive distillation to gradually improve the student. We conduct extensive experiments on five widely-used benchmarks, MS MARCO Passage, TREC Passage 19, TREC Document 19, MS MARCO Document and Natural Questions, where PROD achieves the state of the art among distillation methods for dense retrieval. The code and models will be released.
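The teacher-progressive part can be sketched as a simple staged loop; `distill_one_stage` below is a hypothetical stand-in for a standard distillation step, and the data-progressive part is omitted:

```python
def progressive_teacher_distillation(student, teachers, train_data, distill_one_stage):
    """Distill from a sequence of teachers ordered weakest to strongest, so the
    teacher-student gap stays small at every stage instead of jumping straight
    to the strongest teacher."""
    for teacher in teachers:
        student = distill_one_stage(student, teacher, train_data)
    return student
```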
Given an unexpected change in the output metric of a large-scale system, it is important to answer why the change occurred: which inputs caused the change in the metric? A key component of such attribution questions is estimating counterfactuals: the (hypothetical) change in the system metric due to a specified change in a single input. However, owing to the inherent stochasticity and complex interactions between parts of the system, it is difficult to model the output metric directly. We exploit the computational structure of the system to break the modeling task into sub-parts, such that each sub-part corresponds to a more stable mechanism that can be modeled accurately over time. Using the system's structure also helps to cast the metric as a computation over a structural causal model (SCM), providing a principled way to estimate counterfactuals. Specifically, we propose a method to estimate counterfactuals using time-series predictive models and construct an attribution score, CF-Shapley, that is consistent with desirable axioms for attributing an observed change in the output metric. Unlike past work on causal Shapley values, our proposed method can attribute a single observed change in the output (rather than population-level effects), and thus provides more accurate attribution scores when evaluated on simulated datasets. As a real-world application, we analyze a query-ad matching system, with the goal of attributing observed changes in a metric of ad matching density. The attribution scores explain how query volume and ad demand from different query categories affect the ad matching density, yielding actionable insights and uncovering the role of external events (e.g., "Cheetah Day") in driving the matching density.
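The Shapley aggregation itself can be sketched directly. Here `value(subset)` is a hypothetical counterfactual evaluator (in the paper's setting it would be computed from the per-mechanism time-series models), and exact enumeration is only feasible for a handful of inputs:

```python
from itertools import combinations
from math import factorial

def shapley_attributions(inputs, value):
    """Exact Shapley values: value(S) is the estimated metric change when only the
    inputs in S take their new values and the rest are held at their old values."""
    n = len(inputs)
    scores = {}
    for i in inputs:
        others = [j for j in inputs if j != i]
        phi = 0.0
        for k in range(n):
            for subset in combinations(others, k):
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi += w * (value(frozenset(subset) | {i}) - value(frozenset(subset)))
        scores[i] = phi
    return scores

# Toy usage: an additive metric, so each input's attribution equals its own effect.
effects = {"query_volume": 2.0, "ad_demand": -0.5}
print(shapley_attributions(list(effects), lambda s: sum(effects[i] for i in s)))
```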
We propose a penalized nonparametric approach to estimating the quantile regression process (QRP) in a nonseparable model using rectifier quadratic unit (ReQU) activated deep neural networks, and introduce a novel penalty function to enforce non-crossing of the quantile regression curves. We establish non-asymptotic excess risk bounds for the estimated QRP and derive the mean integrated squared error of the estimated QRP under mild smoothness and regularity conditions. To establish these non-asymptotic risk and estimation error bounds, we also develop a new error bound for approximating $C^s$ smooth functions with $s > 0$ and their derivatives using ReQU activated neural networks. This is a new approximation result for ReQU networks; it is of independent interest and may be useful in other problems. Our numerical experiments show that the proposed method is competitive with, or outperforms, two existing methods for nonparametric quantile regression, including methods using reproducing kernels and random forests.
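A minimal sketch of the training objective, assuming the network takes the quantile level as an extra input (so one network represents the whole process) and charging crossings as mean hinge violations; the paper's exact penalty form may differ:

```python
import torch

def check_loss(residual, tau):
    """Quantile (check) loss: rho_tau(r) = r * (tau - 1{r < 0})."""
    return torch.mean(residual * (tau - (residual < 0).float()))

def quantile_process_loss(model, x, y, taus, penalty_weight=1.0):
    """Check loss summed over quantile levels plus a non-crossing penalty.

    `model(x, tau)` is a hypothetical network taking the quantile level as input;
    the penalty punishes any violation of monotonicity of predictions in tau.
    """
    taus = sorted(taus)
    preds = [model(x, torch.full((x.size(0), 1), t)) for t in taus]
    loss = sum(check_loss(y - p, t) for p, t in zip(preds, taus))
    # Non-crossing: predicted quantiles must be nondecreasing in tau.
    crossing = sum(torch.relu(lo - hi).mean() for lo, hi in zip(preds[:-1], preds[1:]))
    return loss + penalty_weight * crossing
```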
Extreme classification (XC) seeks to tag data points with the most relevant subset of labels from an extremely large label set. Deep XC, with dense, learned representations of data points and labels, has attracted much attention for its superiority over XC methods that use sparse, hand-crafted features. Negative mining techniques have emerged as a critical component of all deep XC methods, allowing them to scale to millions of labels. Nevertheless, despite recent advances, training deep XC models with large encoder architectures such as Transformers remains challenging. This paper identifies that the memory overheads of popular negative mining techniques often force mini-batch sizes to remain small, slowing training. In response, this paper introduces NGAME, a light-weight mini-batch creation technique that provides provably accurate in-batch negative samples. This enables training with larger mini-batches than existing negative sampling techniques, offering faster convergence and higher accuracy. NGAME was found to be up to 16% more accurate than state-of-the-art methods on a wide array of benchmark datasets for extreme classification, and 3% more accurate at retrieving search engine queries in response to a user's webpage visit for showing personalized ads. In live A/B tests on a popular search engine, NGAME yielded gains of up to 23% in click-through rates.
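The core idea of the batch construction can be sketched by clustering training points on their current embeddings and using each cluster as a mini-batch, so that in-batch negatives are automatically hard. This is a simplification; NGAME's actual construction and its accuracy guarantees are more involved:

```python
import numpy as np
from sklearn.cluster import KMeans

def clustered_minibatches(embeddings, batch_size, seed=0):
    """Group semantically similar points into the same mini-batch so that in-batch
    negatives are informative (hard) without a separate negative cache.

    embeddings: (n, d) array of current data-point embeddings.
    """
    n = len(embeddings)
    n_clusters = max(1, n // batch_size)
    labels = KMeans(n_clusters=n_clusters, random_state=seed, n_init=10).fit_predict(embeddings)
    return [np.where(labels == c)[0] for c in range(n_clusters)]  # index batches
```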
The accelerating development of autonomous driving technology has created a greater demand for large amounts of high-quality data. Labeled, representative real-world data is the fuel for training deep learning networks and is critical for improving self-driving perception algorithms. In this paper, we introduce PandaSet, the first dataset produced by a complete, high-precision autonomous vehicle sensor kit with a no-cost commercial license. The dataset was collected using one 360° mechanical spinning LiDAR, one forward-facing long-range LiDAR, and 6 cameras. It contains more than 100 scenes, each 8 seconds long, and provides 28 types of labels for object classification and 37 types of semantic segmentation labels. We provide baselines for LiDAR-only 3D object detection, LiDAR-camera fusion 3D object detection, and LiDAR point cloud segmentation. For more details about PandaSet and the development kit, see https://scale.com/open-datasets/pandaset.
Conditional distributions are a fundamental quantity for describing the relationship between a response and predictors. We propose a Wasserstein generative approach to learning conditional distributions. The proposed approach uses a conditional generator to transform a known distribution into the target conditional distribution. The conditional generator is estimated by matching the joint distribution involving the conditional generator against the target joint distribution, using the Wasserstein distance as the discrepancy measure between these joint distributions. We establish non-asymptotic error bounds for the conditional sampling distribution generated by the proposed method and show that it is able to mitigate the curse of dimensionality, assuming that the data distribution is supported on a low-dimensional set. We conduct numerical experiments to validate the proposed method and illustrate its applications to conditional sample generation, nonparametric conditional density estimation, prediction uncertainty quantification, bivariate response data, image reconstruction, and image generation.
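A compact sketch of the training loop under a WGAN-style critic with weight clipping follows; the network sizes, clipping constant, and toy data generator are our assumptions, and the paper's estimator may use a different Wasserstein formulation:

```python
import torch
import torch.nn as nn

dx, dy, dz, batch = 3, 1, 5, 128   # predictor, response, and noise dimensions

def sample_data(n):
    """Toy joint distribution: Y = sin(X_1) + 0.1 * noise."""
    x = torch.randn(n, dx)
    return x, torch.sin(x[:, :1]) + 0.1 * torch.randn(n, dy)

# Conditional generator G(x, eta) and a critic D(x, y) scoring joint samples.
G = nn.Sequential(nn.Linear(dx + dz, 64), nn.ReLU(), nn.Linear(64, dy))
D = nn.Sequential(nn.Linear(dx + dy, 64), nn.ReLU(), nn.Linear(64, 1))
opt_g = torch.optim.RMSprop(G.parameters(), lr=5e-5)
opt_d = torch.optim.RMSprop(D.parameters(), lr=5e-5)

for step in range(2000):
    x, y = sample_data(batch)
    eta = torch.randn(batch, dz)
    y_fake = G(torch.cat([x, eta], dim=1))
    # Critic step: estimate the Wasserstein gap between (X, G(X, eta)) and (X, Y).
    d_loss = D(torch.cat([x, y_fake.detach()], 1)).mean() - D(torch.cat([x, y], 1)).mean()
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    for p in D.parameters():             # weight clipping keeps the critic ~1-Lipschitz
        p.data.clamp_(-0.01, 0.01)
    # Generator step: move the generated joint distribution toward the data joint.
    g_loss = -D(torch.cat([x, G(torch.cat([x, eta], 1))], 1)).mean()
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```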